mvhtests (version 1.1)

Exponential empirical likelihood for a one sample mean vector hypothesis testing

Description

Exponential empirical likelihood for a one sample mean vector hypothesis testing.

Usage

eel.test1(x, mu, tol = 1e-06, R = 1)

Value

A list including:

p

The estimated probabilities.

lambda

The value of the Lagrangian parameter \(\lambda\).

iter

The number of iterations required by the Newton-Raphson algorithm.

info

The value of the log-likelihood ratio test statistic along with its corresponding p-value.

runtime

The runtime of the process.

Arguments

x

A matrix containing Euclidean data.

mu

The hypothesized mean vector.

tol

The tolerance value used to stop the Newton-Raphson algorithm.

R

The number of bootstrap samples used to calculate the p-value. If R = 1 (the default value), no bootstrap calibration is performed.
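The bootstrap calibration can be sketched as follows. This is a minimal illustration, not the package's internal code: `boot_pvalue` and `hotel_stat` are hypothetical helpers, and the Hotelling-type statistic is used purely as a stand-in for the exponential empirical likelihood statistic. The data are shifted so that the null hypothesis holds exactly, the statistic is recomputed on resampled rows, and the p-value is the proportion of bootstrap statistics at least as large as the observed one.

```r
# Sketch of bootstrap calibration for a one sample mean vector test
# (an assumption-laden illustration, not the package's internal code).
boot_pvalue <- function(x, mu, statistic, R = 299) {
  t0 <- statistic(x, mu)                     # observed test statistic
  y  <- sweep(x, 2, colMeans(x) - mu)        # shift data so its mean equals mu (H0 holds)
  tb <- replicate(R, {
    idx <- sample(nrow(y), replace = TRUE)   # resample rows with replacement
    statistic(y[idx, , drop = FALSE], mu)
  })
  (sum(tb >= t0) + 1) / (R + 1)              # standard bootstrap p-value
}

# A Hotelling-type statistic, used here only as a stand-in.
hotel_stat <- function(x, mu) {
  d <- colMeans(x) - mu
  nrow(x) * sum(solve(cov(x), d) * d)
}
```

With data generated under the null, the calibrated p-value should be roughly uniform over repeated draws.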

Author

Michail Tsagris.

R implementation and documentation: Michail Tsagris mtsagris@uoc.gr.

Details

Exponential empirical likelihood, or exponential tilting, was first introduced by Efron (1981) as a way to perform a "tilted" version of the bootstrap for one sample mean hypothesis testing. Similarly to the empirical likelihood, positive weights \(p_i\), which sum to one, are allocated to the observations, such that the weighted sample mean is equal to the hypothesized mean \(\pmb{\mu}_0\) under \(H_0\). Under \(H_1\) the weights are all equal to \(\frac{1}{n}\), where \(n\) is the sample size.

Following Efron (1981), the \(p_i\) are chosen to minimize the Kullback-Leibler distance from \(H_0\) to \(H_1\) $$ D\left(L_0,L_1\right)=\sum_{i=1}^np_i\log\left(np_i\right), $$ subject to the constraint \(\sum_{i=1}^np_i{\bf x}_i=\pmb{\mu}_0\). The probabilities take the form $$ p_i=\frac{e^{\pmb{\lambda}^T{\bf x}_i}}{\sum_{j=1}^ne^{\pmb{\lambda}^T{\bf x}_j}} $$ and the constraint becomes $$ \frac{\sum_{i=1}^ne^{\pmb{\lambda}^T{\bf x}_i}\left({\bf x}_i-\pmb{\mu}_0\right)}{\sum_{j=1}^ne^{\pmb{\lambda}^T{\bf x}_j}}=0 \Rightarrow \frac{\sum_{i=1}^n{\bf x}_ie^{\pmb{\lambda}^T{\bf x}_i}}{\sum_{j=1}^ne^{\pmb{\lambda}^T{\bf x}_j}}-\pmb{\mu}_0={\bf 0}. $$ A numerical search over \(\pmb{\lambda}\) is required. Under \(H_0\), the log-likelihood ratio test statistic \(\Lambda\) asymptotically follows a \(\chi^2_d\) distribution, where \(d\) denotes the number of variables. Alternatively, a bootstrap p-value may be computed.
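The computation above can be sketched as follows. This is a minimal illustration, not the package's internal code: `eel_sketch` is a hypothetical helper that solves for \(\pmb{\lambda}\) by Newton-Raphson, and the form of the statistic, \(2n\sum_{i=1}^n p_i\log(np_i)\) (twice the sample size times the minimized Kullback-Leibler distance), is one common choice and is an assumption here.

```r
# Sketch of the exponential tilting computation (an illustration,
# not the package's internal code): solve for lambda by Newton-Raphson
# so that the tilted weights reproduce the hypothesized mean mu.
eel_sketch <- function(x, mu, tol = 1e-06, maxit = 100) {
  n <- nrow(x); d <- ncol(x)
  lambda <- numeric(d)                        # start at lambda = 0 (uniform weights)
  for (i in seq_len(maxit)) {
    w <- exp(drop(x %*% lambda))              # unnormalized tilting weights
    p <- w / sum(w)                           # probabilities p_i
    g <- colSums(p * x) - mu                  # constraint: weighted mean minus mu
    if (sqrt(sum(g^2)) < tol) break
    xc <- sweep(x, 2, colSums(p * x))         # center at current weighted mean
    J  <- crossprod(xc, p * xc)               # Jacobian: weighted covariance of x
    lambda <- lambda - solve(J, g)            # Newton-Raphson update
  }
  stat <- 2 * n * sum(p * log(n * p))         # assumed form of the statistic
  list(p = p, lambda = lambda, iter = i, stat = stat,
       pvalue = pchisq(stat, d, lower.tail = FALSE))
}
```

When `mu` equals the sample mean, \(\pmb{\lambda}={\bf 0}\) solves the constraint immediately, every \(p_i = 1/n\), and the statistic is zero.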

References

Efron B. (1981) Nonparametric standard errors and confidence intervals. Canadian Journal of Statistics, 9(2): 139--158.

Jing B.Y. and Wood A.T.A. (1996). Exponential empirical likelihood is not Bartlett correctable. Annals of Statistics, 24(1): 365--369.

Owen A. B. (2001). Empirical likelihood. Chapman and Hall/CRC Press.

See Also

el.test1, hotel1T2, james, hotel2T2, maov, el.test2

Examples

x <- as.matrix( iris[, 1:4] )  # Euclidean data
eel.test1(x, numeric(4) )      # exponential empirical likelihood test
el.test1(x, numeric(4) )       # empirical likelihood test, for comparison